

Search for: All records

Creators/Authors contains: "Roy, Tirthankar"


  1. The Shell Creek Watershed (SCW) is a rural watershed in Nebraska with a history of chronic flooding. Beginning in 2005, a variety of conservation practices were implemented in the watershed, and those practices have since been credited with attenuating flood severity and improving water quality in SCW. This study investigated the impacts of 13 controlling factors on flooding in SCW using an artificial neural network (ANN)-based rainfall-runoff model; flood frequency and drought severity analyses were also conducted. Special emphasis was placed on how flood trends changed once conservation practices were in place, to determine whether any relation exists between those practices and flood peak attenuation, since the strategic conservation plan implemented in the watershed offers a unique opportunity to examine their potential impacts. The ANN model developed in this study showed satisfactory discharge-prediction performance, with a Kling–Gupta Efficiency (KGE) value of 0.57. No individual controlling variable was a significantly better predictor of flooding in SCW than the others, so all 13 variables were used as inputs, which produced the satisfactory discharge-prediction performance. Furthermore, after conservation planning was implemented in SCW, the magnitude of anomalous peak flows increased while the magnitude of annual peak flows decreased. However, a more comprehensive assessment is necessary to identify the relative impacts of the conservation practices on flooding in the basin.
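For context on the KGE score reported in the abstract above, the sketch below shows how the Kling–Gupta Efficiency is commonly computed, in its widely used original formulation, from paired simulated and observed discharge series. The function name and the synthetic example values are illustrative only and are not taken from the study.

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    where r is the linear correlation between simulated and observed flows,
    alpha the ratio of their standard deviations, and beta the ratio of
    their means. KGE = 1 indicates a perfect fit."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]        # correlation component
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Illustrative synthetic discharge series (not data from the study)
obs = np.array([1.2, 3.4, 2.1, 5.6, 4.3, 2.8])
sim = np.array([1.0, 3.0, 2.5, 5.0, 4.8, 2.4])
print(f"KGE = {kling_gupta_efficiency(sim, obs):.2f}")
```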
  2. Process-based modelling offers interpretability and physical consistency in many domains of geosciences but struggles to leverage large datasets efficiently. Machine-learning methods, especially deep networks, have strong predictive skills yet are unable to answer specific scientific questions. In this Perspective, we explore differentiable modelling as a pathway to dissolve the perceived barrier between process-based modelling and machine learning in the geosciences and demonstrate its potential with examples from hydrological modelling. ‘Differentiable’ refers to accurately and efficiently calculating gradients with respect to model variables or parameters, enabling the discovery of high-dimensional unknown relationships. Differentiable modelling involves connecting (flexible amounts of) prior physical knowledge to neural networks, pushing the boundary of physics-informed machine learning. It offers better interpretability, generalizability, and extrapolation capabilities than purely data-driven machine learning, achieving a similar level of accuracy while requiring less training data. Additionally, the performance and efficiency of differentiable models scale well with increasing data volumes. Under data-scarce scenarios, differentiable models have outperformed machine-learning models in producing short-term dynamics and decadal-scale trends owing to the imposed physical constraints. Differentiable modelling approaches are primed to enable geoscientists to ask questions, test hypotheses, and discover unrecognized physical relationships. Future work should address computational challenges, reduce uncertainty, and verify the physical significance of outputs. 
    Free, publicly accessible full text available July 11, 2024
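To make the "differentiable modelling" idea above concrete, here is a minimal, hypothetical sketch (not code from the paper): a toy one-bucket rainfall-runoff model written in JAX, where automatic differentiation supplies the gradient of the fit criterion with respect to the physical recession parameter k, so the parameter can be calibrated by gradient descent. The model structure, the parameter name k, and the data are all invented for illustration.

```python
import jax
import jax.numpy as jnp

def bucket_model(k, rain):
    """Toy process-based model: a single storage bucket that releases a
    fraction k of its contents at each time step."""
    storage = 0.0
    runoff = []
    for p in rain:
        storage = storage + p        # precipitation fills the bucket
        q = k * storage              # outflow is a fraction k of storage
        storage = storage - q
        runoff.append(q)
    return jnp.stack(runoff)

def loss(k, rain, obs):
    return jnp.mean((bucket_model(k, rain) - obs) ** 2)

# Because the model is expressed in a differentiable framework, the gradient
# of the loss with respect to the physical parameter comes from autodiff.
grad_fn = jax.grad(loss)

rain = jnp.array([0.0, 5.0, 2.0, 0.0, 8.0, 1.0, 0.0])
obs = bucket_model(0.3, rain)        # synthetic "observations" with k = 0.3

k = 0.8                              # deliberately wrong initial guess
for _ in range(300):                 # plain gradient descent
    k = k - 0.01 * grad_fn(k, rain, obs)
print(float(k))                      # converges towards 0.3
```

The same pattern scales to models with neural-network components in place of hand-picked closure relations, which is the direction the Perspective argues for.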
  3. We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data generating probability density function (pdf) into equal-probability-mass intervals. And, whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small sample bias can be as large as −10% and for BC as large as −50%. We speculate that estimating quantile locations, rather than bin-probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data generating pdf.
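As a rough illustration of the quantile-spacing idea described in the abstract above, the sketch below estimates entropy by splitting the support into equal-probability-mass intervals at estimated quantiles and treating the pdf as uniform within each interval. This is a simplified, single-pass reading of the approach: the paper's method estimates the quantiles more carefully and uses bootstrapping to quantify uncertainty, so this bare-bones version should not be expected to reproduce the reported sub-1% bias. Function and parameter names are invented for illustration.

```python
import numpy as np

def qs_entropy(sample, frac=0.3):
    """Quantile-spacing style estimate of differential entropy (in nats).

    The support of the unknown pdf is divided into n_int equal-probability-
    mass intervals at estimated quantiles; treating the pdf as uniform
    within each interval (density 1/(n_int * width)) gives
        H ~= mean over intervals of log(n_int * width).
    `frac` sets the number of intervals as a fraction of sample size; the
    abstract reports an optimal fraction of roughly 0.25-0.35.
    Assumes a continuous sample (no tied values)."""
    x = np.asarray(sample, dtype=float)
    n_int = max(int(frac * x.size), 2)                 # number of intervals
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_int + 1))
    widths = np.diff(edges)                            # quantile spacings
    return float(np.mean(np.log(n_int * widths)))

# Standard Gaussian sample; the analytic entropy is 0.5*ln(2*pi*e) ~ 1.419 nats
rng = np.random.default_rng(42)
sample = rng.normal(size=1000)
print(f"QS-style estimate: {qs_entropy(sample):.3f} nats")
```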